An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems
Authors
Abstract
We study unconstrained optimization problems with a nonsmooth and convex objective function in the form of a mathematical expectation. The proposed method approximates the expected objective function by a sample average function, using Inexact Restoration-based adapted sample sizes. The sample size is chosen in an adaptive manner based on Inexact Restoration. The algorithm uses a line search and assumes descent directions with respect to the current approximate function. We prove almost sure convergence under standard assumptions. Numerical results for two types of problems, machine learning loss functions for training classifiers and stochastic linear complementarity problems, demonstrate the efficiency of the scheme.
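To make the scheme concrete, here is a minimal Python sketch of a variable-sample-size subgradient method in this spirit. All names and parameters are illustrative, and the fixed sample-size growth factor is a simplified stand-in for the paper's Inexact Restoration rule; the line search, however, follows the stated requirement of descent with respect to the current sample average approximation.

    import numpy as np

    def ir_vss_subgradient(f, g, sampler, x0, n0=10, n_max=2000,
                           growth=1.3, beta=1e-4, max_iter=200, seed=0):
        # f(x, xi): nonsmooth convex loss for one sample xi
        # g(x, xi): a subgradient of f(., xi) at x
        # sampler(rng, n): draws n i.i.d. samples
        # NOTE: the fixed growth factor below is a simplified stand-in for
        # the paper's Inexact Restoration sample-size rule.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        n = n0
        for _ in range(max_iter):
            xi = sampler(rng, n)
            f_hat = lambda y: np.mean([f(y, s) for s in xi])   # SAA objective
            d = -np.mean([g(x, s) for s in xi], axis=0)        # assumed descent direction
            # Armijo backtracking on the *approximate* function, matching
            # the requirement of descent w.r.t. the current SAA.
            t, fx = 1.0, f_hat(x)
            while f_hat(x + t * d) > fx - beta * t * np.dot(d, d) and t > 1e-12:
                t *= 0.5
            x = x + t * d
            n = min(int(growth * n) + 1, n_max)                # increase accuracy
        return x

    # Toy usage: minimize E|x - xi| with xi ~ N(0, 1); the minimizer is 0.
    x_star = ir_vss_subgradient(lambda x, s: abs(x[0] - s),
                                lambda x, s: np.sign(x - s),
                                lambda rng, n: rng.normal(size=n),
                                x0=[5.0])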
Similar resources
Stochastic Coordinate Descent for Nonsmooth Convex Optimization
Stochastic coordinate descent, due to its practicality and efficiency, is increasingly popular in the machine learning and signal processing communities, as it has proven successful in several large-scale optimization problems, such as l1-regularized regression and Support Vector Machines, to name a few. In this paper, we consider a composite problem where the nonsmoothness has a general structure that...
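As an illustration of the setting this abstract describes, below is a small sketch of randomized proximal coordinate descent for one such composite problem, l1-regularized least squares (the lasso). The function name and parameters are ours, not the paper's.

    import numpy as np

    def lasso_coordinate_descent(A, b, lam, iters=2000, seed=0):
        # Randomized proximal coordinate descent for
        # min_x 0.5*||Ax - b||^2 + lam*||x||_1.
        rng = np.random.default_rng(seed)
        m, d = A.shape
        x = np.zeros(d)
        r = A @ x - b                        # residual, maintained incrementally
        L = (A ** 2).sum(axis=0)             # coordinate-wise Lipschitz constants
        for _ in range(iters):
            j = rng.integers(d)              # sample one coordinate uniformly
            z = x[j] - (A[:, j] @ r) / L[j]  # gradient step on the smooth part
            xj = np.sign(z) * max(abs(z) - lam / L[j], 0.0)  # soft-threshold (prox of l1)
            r += A[:, j] * (xj - x[j])       # keep the residual in sync
            x[j] = xj
        return x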
An Unconstrained Optimization Technique for Nonsmooth Nonlinear Complementarity Problems
In this article, we consider an unconstrained minimization formulation of the nonlinear complementarity problem NCP(f) when the underlying functions are H-differentiable but not necessarily locally Lipschitzian or directionally differentiable. We show how, under appropriate regularity conditions on an H-differential of f, minimizing the merit function corresponding to f leads to a solution of ...
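The merit-function idea can be sketched with the standard Fischer-Burmeister reformulation, a common concrete instance of this approach; the paper's H-differentiable setting is more general than this toy example, and the names below are ours.

    import numpy as np
    from scipy.optimize import minimize

    def fb_merit(x, f):
        # Fischer-Burmeister residual phi(a, b) = sqrt(a^2 + b^2) - a - b
        # vanishes iff a >= 0, b >= 0 and a*b = 0, so the merit function
        # Psi(x) = 0.5*||phi||^2 is zero exactly at solutions of NCP(f).
        fx = f(x)
        phi = np.sqrt(x ** 2 + fx ** 2) - x - fx
        return 0.5 * phi @ phi

    # Toy linear instance f(x) = Mx + q with M positive definite; the
    # unique complementarity solution is x = (1/3, 1/3), where the merit
    # function should be driven to zero.
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, -1.0])
    f = lambda x: M @ x + q
    sol = minimize(fb_merit, x0=np.ones(2), args=(f,), method='Nelder-Mead')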
Benson's algorithm for nonconvex multiobjective problems via nonsmooth Wolfe duality
In this paper, we propose an algorithm to obtain an approximation set of the (weakly) nondominated points of nonsmooth multiobjective optimization problems with equality and inequality constraints. We use an extension of the Wolfe duality to construct the separating hyperplane in Benson's outer algorithm for multiobjective programming problems with subdifferentiable functions. We also fo...
Stochastic ADMM for Nonsmooth Optimization
The Alternating Direction Method of Multipliers (ADMM) has gained a lot of attention due to large-scale machine learning demands. • Classic (70's) and flexible; survey paper: (Boyd 2009) • Applications: compressed sensing (Yang & Zhang, 2011), image restoration (Goldstein & Osher, 2009), video processing and matrix completion (Goldfarb et al., 2010) • Recent variations: Linearized (Goldfarb et al., 2010;...
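For orientation, a minimal deterministic ADMM loop for the lasso is sketched below; the stochastic variants this abstract refers to replace the exact x-update with sampled or linearized steps. Function name and parameters are illustrative.

    import numpy as np

    def admm_lasso(A, b, lam, rho=1.0, iters=200):
        # ADMM (scaled form) for min 0.5*||Ax - b||^2 + lam*||z||_1, s.t. x = z.
        m, d = A.shape
        x, z, u = np.zeros(d), np.zeros(d), np.zeros(d)
        P = np.linalg.inv(A.T @ A + rho * np.eye(d))   # cached matrix for x-update
        Atb = A.T @ b
        for _ in range(iters):
            x = P @ (Atb + rho * (z - u))              # ridge-type x-update
            w = x + u
            z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # prox of the l1 term
            u = u + x - z                              # scaled dual update
        return z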
Randomized Block Subgradient Methods for Convex Nonsmooth and Stochastic Optimization
Block coordinate descent methods and stochastic subgradient methods have been extensively studied in optimization and machine learning. By combining randomized block sampling with stochastic subgradient methods based on dual averaging ([22, 36]), we present stochastic block dual averaging (SBDA), a novel class of block subgradient methods for convex nonsmooth and stochastic optimization. SBDA re...
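A rough sketch of the block dual-averaging idea follows, under simplifying assumptions (uniform block sampling, a plain 1/sqrt(t) scaling, no regularizer); this is not SBDA's exact update or step-size rule, and all names are ours.

    import numpy as np

    def block_dual_averaging(subgrad, dim, n_blocks=4, gamma=1.0,
                             iters=1000, seed=0):
        # Each step: sample a block, accumulate its subgradient components,
        # and reset that block from the running dual average.
        rng = np.random.default_rng(seed)
        blocks = np.array_split(np.arange(dim), n_blocks)
        x, zsum = np.zeros(dim), np.zeros(dim)      # iterate, summed subgradients
        for t in range(1, iters + 1):
            B = blocks[rng.integers(n_blocks)]      # uniformly sampled block
            g = subgrad(x)
            zsum[B] += g[B]                         # dual-averaging accumulation
            x[B] = -zsum[B] / (gamma * np.sqrt(t))  # block update (Euclidean prox)
        return x

    # Toy usage: f(x) = ||x - 1||_1, with subgradient sign(x - 1).
    x_star = block_dual_averaging(lambda x: np.sign(x - np.ones_like(x)), dim=8)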
Journal
Journal title: Journal of Computational and Applied Mathematics
Year: 2023
ISSN: 0377-0427, 1879-1778, 0771-050X
DOI: https://doi.org/10.1016/j.cam.2022.114943